Recall of a live and personally experienced eyewitness event by adults with autism spectrum disorder
The aim of the present study was to (a) extend previous eyewitness research in autism spectrum disorder (ASD) using a live and personally experienced event; (b) examine whether witnesses with ASD demonstrate a facilitative effect in memory for self- over other-performed actions; and (c) explore the source monitoring abilities of witnesses with ASD in discriminating who performed which actions within the event. Eighteen high-functioning adults with ASD and 18 age- and IQ-matched typical counterparts participated in a live first aid scenario in which they and the experimenter each performed a number of actions. Participants were subsequently interviewed about their memory of the event using a standard interview procedure of free recall followed by questioning. The ASD group recalled just as many correct details from the event overall as the comparison group; however, they made more errors. This was the case across both the free recall and questioning phases. Both groups showed a self-enactment effect across both interview phases, recalling more actions that they had performed themselves than actions the experimenter had performed. However, the ASD group was more likely than the typical comparison group to confuse the source of self-performed actions in free recall, but not in questioning, which may indicate executive functioning difficulties with unsupported test procedures. Findings are discussed in terms of their theoretical and practical implications.
Mightability: A Multi-State Visuo-Spatial Reasoning for Human-Robot Interaction
We humans are capable of estimating various abilities of ourselves and of the person we are interacting with; visibility and reachability are two such abilities. Studies in neuroscience and psychology suggest that children begin to understand the occlusion of others' line of sight from the age of 12-15 months, and from the age of 3 years they begin to develop the ability termed perceived reachability, for themselves and for others. As such capabilities evolve, children start to show intuitive and proactive behavior by perceiving various abilities of the human partner. Inspired by such studies, which suggest that visuo-spatial perception plays an important role in human-human interaction, we propose to equip our robot to perceive various types of abilities of the agents in its workspace. The robot perceives such abilities not only from the current state of an agent but also by virtually putting the agent into various achievable states, such as turning left or standing up. Because the robot estimates what an agent might be able to 'see' and 'reach' if it were in a particular state, we term these analyses Mightability Analyses. Currently the robot performs such Mightability analyses at two levels: cells in a 3D grid and objects in the space, which we term Mightability Maps (MM) and Object Oriented Mightabilities (OOM) respectively. We have shown applications of Mightability analyses in cooperative tasks, such as showing an object to the human or making it accessible, as well as competitive tasks, such as hiding an object from the human or putting it out of reach. Such Mightability analyses equip the robot for higher-level learning and decisional capabilities and could also facilitate better verbalized interaction and proactive behavior.
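The core idea of a Mightability Map can be illustrated in miniature: for each cell of a grid, record the set of virtual agent states (e.g. seated, standing, leaning) in which the agent might be able to reach that cell. The sketch below is a simplified assumption of this scheme, using a plain distance-to-shoulder test; the state names, shoulder positions, and reach radii are hypothetical, and the paper's actual system also models visibility and uses full geometric reasoning.

```python
# Minimal sketch of a "Mightability Map": for each 3D grid cell, the set
# of virtual agent states in which the agent might reach that cell.
# States, shoulder positions, and reach radii below are illustrative
# assumptions, not values from the paper.
import math

STATES = {
    "seated":   {"shoulder": (0.0, 0.0, 0.6), "reach": 0.7},
    "standing": {"shoulder": (0.0, 0.0, 1.3), "reach": 0.8},
    "leaning":  {"shoulder": (0.3, 0.0, 1.1), "reach": 0.9},
}

def mightability_map(cells, states=STATES):
    """Map each (x, y, z) cell to the set of states in which it is
    reachable, using a simple distance-to-shoulder test."""
    result = {}
    for cell in cells:
        reachable = set()
        for name, s in states.items():
            # Cell is "mightable" in this state if it lies within the
            # state's reach sphere around the shoulder position.
            if math.dist(s["shoulder"], cell) <= s["reach"]:
                reachable.add(name)
        result[cell] = reachable
    return result
```

A cell near the seated shoulder position would be marked reachable in several states, while a far-away cell gets an empty set; a real system would replace the reach sphere with posture-aware kinematics and add an analogous visibility test per state.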